Hyper basis function network : Wikipedia (English edition)
Hyper basis function network

In machine learning, a Hyper basis function network, or HyperBF network, is a generalization of the radial basis function (RBF) network concept, in which a Mahalanobis-like distance is used in place of the Euclidean distance. Hyper basis function networks were first introduced by Poggio and Girosi in the 1990 paper "Networks for Approximation and Learning".〔T. Poggio and F. Girosi (1990). "Networks for Approximation and Learning". ''Proc. of the IEEE'' Vol. 78, No. 9:1481-1497.〕〔R.N. Mahdi, E.C. Rouchka (2011). "Reduced HyperBF Networks: Regularization by Explicit Complexity Reduction and Scaled Rprop-Based Training". ''IEEE Transactions on Neural Networks'' 2:673–686.〕
==Network Architecture==

The typical HyperBF network structure consists of a real input vector x\in\mathbb{R}^n, a hidden layer of activation functions, and a linear output layer. The output of the network, a scalar function of the input vector \phi:\mathbb{R}^n\to\mathbb{R}, is given by

\phi(x)=\sum_{j=1}^{N}a_j\rho_j(\|x-\mu_j\|)

where N is the number of neurons in the hidden layer, and \mu_j and a_j are the center and weight of neuron j. The activation function \rho_j(\|x-\mu_j\|) of the HyperBF network takes the form

\rho_j(\|x-\mu_j\|)=e^{-(x-\mu_j)^{T}R_j(x-\mu_j)}

where R_j is a positive definite n\times n matrix. Depending on the application, the following types of matrices R_j are usually considered:〔F. Schwenker, H.A. Kestler and G. Palm (2001). "Three Learning Phases for Radial-Basis-Function Networks". ''Neural Netw.'' 14:439-458.〕
* R_j=\frac{1}{2\sigma^2}\mathbb{I}_{n\times n}, where \sigma>0. This case corresponds to the regular RBF network.
* R_j=\frac{1}{2\sigma_j^2}\mathbb{I}_{n\times n}, where \sigma_j>0. In this case the basis functions are radially symmetric, but are scaled with different widths.
* R_j=\operatorname{diag}\left(\frac{1}{2\sigma_{j1}^2},\ldots,\frac{1}{2\sigma_{jn}^2}\right), where \sigma_{ji}>0. Every neuron has an elliptic shape with a varying size.
* A positive definite matrix that is not diagonal.
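As a minimal illustrative sketch (the function and variable names here are my own, not from the cited papers), the forward pass above and two of the listed choices of R_j can be written in Python with NumPy:

```python
import numpy as np

def hyperbf_output(x, centers, weights, R_list):
    """Evaluate phi(x) = sum_j a_j * exp(-(x - mu_j)^T R_j (x - mu_j))."""
    total = 0.0
    for mu, a, R in zip(centers, weights, R_list):
        d = x - mu
        total += a * np.exp(-d @ R @ d)  # Mahalanobis-like distance via R_j
    return float(total)

n = 2
# Regular RBF case: R_j = (1 / (2 sigma^2)) * I, one shared isotropic width.
sigma = 1.0
R_rbf = np.eye(n) / (2 * sigma**2)
# Elliptic case: R_j = diag(1 / (2 sigma_ji^2)), one width per input dimension.
sigmas = np.array([0.5, 2.0])
R_diag = np.diag(1.0 / (2 * sigmas**2))

x = np.array([0.5, -0.5])
centers = [np.zeros(n), np.ones(n)]
weights = [1.0, -0.5]
out = hyperbf_output(x, centers, weights, [R_rbf, R_diag])
```

At a neuron's own center the distance term vanishes, so that neuron contributes exactly its weight a_j; the choice of R_j only reshapes how the contribution decays away from the center.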

Source: Wikipedia, the free encyclopedia.